Mary Donovan and Sharon Solis
April 9, 2019
Why and when to use HPC?
Designed for computational problems that are too large, take too long, or require more file storage than a standard computer can provide
When HPC might not be your solution:
Campus-available cluster Knot (CentOS/RHEL 6):
110-node, ~1400-core system
4 'fat nodes' (1 TB RAM each)
GPU nodes (12 M2050s; now too old)
Campus-available cluster Pod (CentOS/RHEL 7):
70-node, ~2600-core system
4 'fat nodes' (1 TB RAM each)
GPU nodes (3, each with quad NVIDIA V100/32 GB and NVLink)
GPU development node (P100, 1080 Ti, Titan V)
Condo clusters (PIs buy their own compute nodes):
Guild (60 nodes)
Braid (120 nodes, also has GPUs)
Accounts
Request access: http://csc.cnsi.ucsb.edu/acct
XSEDE: NSF-sponsored service organization that provides access to computing resources.
Campus Champion (Sharon Solis): represents XSEDE on campus
Log in: ssh username@pod.cnsi.ucsb.edu
Check the queue: showq
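Typing the full hostname every time gets tedious; an SSH config entry shortens it. A minimal sketch for `~/.ssh/config`, assuming the hypothetical username `yourname` (replace with your own):

```
Host pod
    HostName pod.cnsi.ucsb.edu
    User yourname
```

With this in place, `ssh pod` logs you in, and scp targets shorten the same way (e.g. `scp file.txt pod:file_copy.txt`).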
scp file.txt user@pod.cnsi.ucsb.edu:file_copy.txt
Let's also make a quick R script to run:
echo 'data <- data.frame(x=seq(1:10), y=seq(1:10)); write.csv(data, "testcsv.csv", row.names=F)' > myscript.R
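Quoting a one-liner inside echo is fragile because the R code itself contains double quotes; a here-document avoids the nesting entirely. A sketch that produces the same `myscript.R`:

```shell
# Write the R script with a quoted here-document
# (the quoted 'EOF' delimiter prevents shell expansion inside the body)
cat > myscript.R <<'EOF'
data <- data.frame(x=seq(1:10), y=seq(1:10))
write.csv(data, "testcsv.csv", row.names=FALSE)
EOF
```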
nano submit.job
#!/bin/bash -l
#Serial (1 core on one node) job...
#SBATCH --nodes=1 --ntasks-per-node=1
cd $SLURM_SUBMIT_DIR
module load R
Rscript myscript.R
sbatch submit.job
squeue -u mdono
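The nano session above can be replaced by a script when you are setting up many runs. A sketch that writes the same `submit.job` non-interactively, matching the batch script shown above:

```shell
# Create the batch script without opening an editor
cat > submit.job <<'EOF'
#!/bin/bash -l
# Serial (1 core on one node) job
#SBATCH --nodes=1 --ntasks-per-node=1
cd $SLURM_SUBMIT_DIR
module load R
Rscript myscript.R
EOF
```

Once the job completes, SLURM writes the job's stdout/stderr to `slurm-<jobid>.out` by default, and `testcsv.csv` from myscript.R appears in the submit directory.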